Cross View Gait Recognition Using Correlation Strength
Abstract
Gait is a behavioural biometric that is particularly useful for non-intrusive and/or non-cooperative person identification at a distance in unconstrained public spaces. However, such environments also make gait recognition more difficult than in a controlled setting where the view angle is fixed and known. This is largely because gait can be affected by many factors: people walk in different clothes, under different carrying conditions, at variable speed and in different shoes, and are observed from arbitrary views. In particular, changes in view angle pose one of the biggest challenges to gait recognition, as they can significantly change the visual features available for matching.

Recently, a number of view transformation based approaches [2, 3] have been proposed that can potentially cope with large view angle changes and do not rely on camera calibration. These approaches aim to learn a mapping relationship between gait features of the same subject observed across views. When matching gait sequences from different views, the gait features are mapped/reconstructed into the same view before a distance measure is computed for matching. An advantage of these methods is that they cope better with large view angle changes than earlier works. However, a view transformation based method also has a number of drawbacks: 1) it suffers from degeneracies and singularities caused by features that are visible in one view but not in the other when the view angle difference is large; 2) the reconstruction process propagates the noise present in the gait features of one view into the other, thus decreasing recognition performance.

In this paper we propose a novel approach to cross-view gait recognition that addresses the problems associated with the view transformation model. Specifically, we model the correlation of gait sequences from different views using Canonical Correlation Analysis (CCA). A CCA model projects gait sequences from two views into two different subspaces such that they are maximally correlated. Similar to the existing view transformation methods, the CCA model also captures the mapping relationship between gait features of different views, albeit implicitly. However, rather than reconstructing gait features in the same view and matching them using a distance measure, we use the CCA correlation strengths directly to match two gait sequences. This brings two key advantages: 1) by projecting the gait features into the two maximally correlated subspaces, features that become invisible across views are automatically identified and removed; 2) without reconstruction in the original gait feature space, our approach is more robust to feature noise. We also address the problem of view angle recognition using Gaussian Process (GP) classification, in order to build a complete gait recognition system in which the view angle of the probe sequence is unknown. This differs from existing approaches, which assume the probe view angle is known. Experiments demonstrate that 1) our GP classification based view angle recognition method effectively identifies the view angle and is superior to an SVM based method; and 2) our method significantly outperforms the existing view transformation models [2, 3] in gait recognition, even when they are given the probe sequence view angle.
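To make the matching stage concrete, a minimal sketch is given below. It is illustrative only: it assumes paired training feature vectors of the same subjects under the two views, uses scikit-learn's CCA as a stand-in for the CCA model described above, and scores a probe/gallery pair by normalised correlation in the learned subspaces; the exact correlation-strength scoring rule of the paper is not reproduced here.

import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cross_view_cca(feats_view_a, feats_view_b, n_components=20):
    # Rows of the two matrices are paired: row i in both views comes from the
    # same training subject. n_components must not exceed the number of
    # training subjects or the feature dimensionality.
    cca = CCA(n_components=n_components)
    cca.fit(feats_view_a, feats_view_b)
    return cca

def correlation_strength(cca, probe_a, gallery_b):
    # Project a probe (view A) and a gallery (view B) feature vector into the
    # two maximally correlated subspaces and score them by normalised
    # correlation (an illustrative stand-in for the correlation strength).
    pa, gb = cca.transform(probe_a[None, :], gallery_b[None, :])
    pa, gb = pa.ravel(), gb.ravel()
    return float(pa @ gb / (np.linalg.norm(pa) * np.linalg.norm(gb) + 1e-12))

# Identification: the gallery subject with the highest correlation strength wins.
# scores = [correlation_strength(cca, probe_feat, g) for g in gallery_feats]
# predicted_id = gallery_ids[int(np.argmax(scores))]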
We compute two gait representations, one for view angle recognition and the other for cross-view gait recognition. For view angle recognition, gait sequences are represented using Truncated Gait Energy Images (TGEI). A TGEI (Fig. 1, bottom row) is simply a Gait Energy Image (GEI) with its top part (head and torso) removed, and is generated by taking only the bottom one third of the GEI. Gait Flow Images (GFI) [1] are used as the gait feature for cross-view gait recognition. GFIs provide a more discriminative representation for identity recognition than GEIs by capturing the multiple independent motions of different body parts over a gait cycle [1], and they are robust to various covariate conditions such as carrying and clothing [1].

Figure 1: Top row: GEIs of the same subject at views of 36°, 54°, 72°, 90°, 108°, 126° and 144°. Bottom row: TGEIs obtained from the GEIs in the top row.
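As an illustrative sketch only, the view angle recognition stage can be approximated by combining the TGEI cropping rule described above with an off-the-shelf Gaussian Process classifier; the assumption that all GEIs share a common size, the vectorised-TGEI input and the RBF kernel are choices made here, not details taken from the paper.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def truncated_gei(gei):
    # TGEI: keep only the bottom one third of the GEI (the legs), discarding
    # the head-and-torso region.
    h = gei.shape[0]
    return gei[2 * h // 3:, :]

def train_view_classifier(geis, view_labels):
    # geis: list of 2-D GEI arrays normalised to a common size;
    # view_labels: view angles in degrees (e.g. 36, 54, ..., 144).
    X = np.stack([truncated_gei(g).ravel() for g in geis])
    clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    return clf.fit(X, np.asarray(view_labels))

# At test time the probe's view angle is predicted first, and the CCA model for
# that (probe view, gallery view) pair is then used for matching:
# clf = train_view_classifier(train_geis, train_views)
# probe_view = clf.predict(truncated_gei(probe_gei).ravel()[None, :])[0]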
Publication year: 2010